
Re: Redundant web pages



Interesting question.  Here are some possibilities; some might fly today,
some might take a while to make provisions for:

1) Put a low TTL on round-robin A's.  Means your DNS servers get
pounded.
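For example, in a BIND zone file the trade-off looks like this (names,
addresses, and the 60-second TTL are illustrative):

```
; Round-robin A records with a short TTL, so clients re-resolve soon
; after a failure -- at the cost of many more queries hitting your
; DNS servers.
www    60    IN    A    192.0.2.10
www    60    IN    A    192.0.2.11
```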

2) Set up two boxes.  Copy web data from one to the other, over your
protocol of choice (web, ftp, NFS, dual-ported SCSI, whatever).  Make
the first your "primary" web server.  Make the second periodically ping
the first.  If the first doesn't respond, make the second ifconfig
itself to the IP address of the first (!).  You may need to get your
router to flush its arp cache every so often.  You may also want to set
things up so the second can kill off the first - so the first doesn't
reboot and cause an IP address conflict - this could possibly be done
with a serial cable, and a break signal, or some slightly fancier
hardware to shut off the power.
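The ping-and-takeover half of this might be sketched as below (all
names, the interface, and the address are illustrative; the takeover
step needs root, and you'd run it from cron or a loop on the second
box):

```shell
# Sketch of the standby's health check.  Addresses/interface are
# hypothetical; takeover requires root.
failover_check() {
    primary=$1               # primary's service IP, e.g. 192.0.2.10
    iface=${2:-eth0}         # interface to alias on takeover
    tries=3
    i=0
    while [ "$i" -lt "$tries" ]; do
        # -W 2: Linux-style two-second timeout per probe
        if ping -c 1 -W 2 "$primary" >/dev/null 2>&1; then
            return 0         # primary answered; stand down
        fi
        i=$((i + 1))
    done
    # Primary missed every probe: claim its address.  A gratuitous
    # ARP (arping -U, if you have it) helps the router forget the
    # old MAC sooner than waiting for its arp cache to time out.
    ifconfig "$iface:0" "$primary" up || return 1
    arping -U -c 3 -I "$iface" "$primary" 2>/dev/null
    return 1                 # nonzero = we took over (or tried to)
}
```

The "kill off the first" interlock would sit just before the ifconfig,
so a rebooting primary can't come back and collide with the address.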

3) Push for an extension to DNS, to handle "any of these N hosts are
dandy for satisfying requests to this address", a la MX records.  It
seems like something more general than protocol-specific facilities may
be called for.  At first blush, it seems that an MX record-analog, that
is a function not just of host, but also of port # and tcp/udp/whatever,
might work well.  Of course, DNS requests do not generally indicate what
service is going to be used, so the problem isn't Quite that simple.

4) Don't push for an extension to DNS, just use HESIOD records or
something, and push for a change in the popular browsers to inspect
those records, in preference to traditional A's, &c.

5) Buy some high availability solution - there are many for NFS service.
You may be able to retarget such hardware for http.  Many of them do
some variant of #2, above.

6) It seems like something like this could be done with some fanciness
in a router.  But then that router is a single point of failure...  And
you could, at that point, set up a simple packet-passing *ix box (or PC,
or Mac) that just shuttles packets to any of a list of hosts, checking
that they're up every so often...  If it's on the same subnet as the
*ix/pc/mac thing, you get two copies of each packet on the wire, but oh
well...
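The "checking that they're up every so often" part of such a box could
be as simple as this (hypothetical helper; assumes an nc that supports
-z for connection probes):

```shell
# pick_live_host: print the first host in the list that accepts TCP
# connections on the given port; fail if none do.  Host list and port
# are whatever your forwarder is configured with.
pick_live_host() {
    port=$1
    shift
    for h in "$@"; do
        # -z: just probe the port; -w 2: give up after two seconds
        if nc -z -w 2 "$h" "$port" 2>/dev/null; then
            echo "$h"
            return 0
        fi
    done
    return 1                 # nobody home
}
```

The forwarder would call this periodically and shuttle packets to
whichever host it last found alive.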

lazear@gateway.mitre.org wrote:
> 
> If one wants to provide a web page that is always available,
> what mechanisms are available to create this redundant capability?
> I know about round-robin DNS that gives a new address from a pool
> of servers, but caching the answer means a client won't ask for a
> new address for hours (so for them, the page is down).  What other
> means of having a redundant web page are there?  Thanks for any
> suggestions.
> 
>         Walt

